Exploration server: Health and musical practice (Serveur d'exploration Santé et pratique musicale)

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.

Internal identifier: 000570 (Main/Exploration); previous: 000569; next: 000571

Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.

Authors: Michael J. Silverman; Edward T. Schwartzberg [United States]

Source:

RBID: pubmed:29890898

French descriptors

English descriptors

Abstract

Information is often paired with music to facilitate memory and learning. However, there is a lack of basic research investigating how visual and auditory presentation styles and musical elements might facilitate recall. The purpose of this study is to isolate and determine the effects of visual and auditory presentation styles and musical elements on working memory as measured by sequential monosyllabic digit recall performance. Recall was tested on 60 undergraduate university students during six different conditions: (a) Visual + Auditory Chant, (b) Visual + Auditory Melody, (c) Visual + Auditory Speech, (d) Auditory Chant, (e) Auditory Melody, and (f) Auditory Speech. There was a significant interaction between presentation style and musical element conditions. There were significant differences between auditory and visual + auditory conditions in the melody and speech conditions but not in the chant condition. In all cases, the auditory condition had more accurate recall than the visual + auditory condition, with recall differences largest during the speech condition. There was no significant difference between chant and melody but significant differences between chant and speech and melody and speech in the visual + auditory condition. In the auditory condition, recall accuracy was lower for speech than for melody or chant. There was no significant difference between chant and melody, chant and speech, or melody and speech in the visual + auditory condition. Congruent with existing research, the addition of visual input likely overloaded working memory resulting in worse recall. Implications for clinical practice, limitations, and suggestions for future research are provided.

DOI: 10.1177/0033294118781937
PubMed: 29890898


Affiliations:


Links to previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.</title>
<author>
<name sortKey="Silverman, Michael J" sort="Silverman, Michael J" uniqKey="Silverman M" first="Michael J" last="Silverman">Michael J. Silverman</name>
</author>
<author>
<name sortKey="Schwartzberg, Edward T" sort="Schwartzberg, Edward T" uniqKey="Schwartzberg E" first="Edward T" last="Schwartzberg">Edward T. Schwartzberg</name>
<affiliation wicri:level="2">
<nlm:affiliation>School of Music, University of Minnesota, Minneapolis, MN, USA.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>School of Music, University of Minnesota, Minneapolis, MN</wicri:regionArea>
<placeName>
<region type="state">Minnesota</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2019">2019</date>
<idno type="RBID">pubmed:29890898</idno>
<idno type="pmid">29890898</idno>
<idno type="doi">10.1177/0033294118781937</idno>
<idno type="wicri:Area/Main/Corpus">000758</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">000758</idno>
<idno type="wicri:Area/Main/Curation">000758</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">000758</idno>
<idno type="wicri:Area/Main/Exploration">000758</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.</title>
<author>
<name sortKey="Silverman, Michael J" sort="Silverman, Michael J" uniqKey="Silverman M" first="Michael J" last="Silverman">Michael J. Silverman</name>
</author>
<author>
<name sortKey="Schwartzberg, Edward T" sort="Schwartzberg, Edward T" uniqKey="Schwartzberg E" first="Edward T" last="Schwartzberg">Edward T. Schwartzberg</name>
<affiliation wicri:level="2">
<nlm:affiliation>School of Music, University of Minnesota, Minneapolis, MN, USA.</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>School of Music, University of Minnesota, Minneapolis, MN</wicri:regionArea>
<placeName>
<region type="state">Minnesota</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Psychological reports</title>
<idno type="eISSN">1558-691X</idno>
<imprint>
<date when="2019" type="published">2019</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Auditory Perception (physiology)</term>
<term>Female (MeSH)</term>
<term>Humans (MeSH)</term>
<term>Male (MeSH)</term>
<term>Memory, Short-Term (physiology)</term>
<term>Music (psychology)</term>
<term>Neuropsychological Tests (MeSH)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="KwdFr" xml:lang="fr">
<term>Femelle (MeSH)</term>
<term>Humains (MeSH)</term>
<term>Musique (psychologie)</term>
<term>Mâle (MeSH)</term>
<term>Mémoire à court terme (physiologie)</term>
<term>Perception auditive (physiologie)</term>
<term>Perception visuelle (physiologie)</term>
<term>Tests neuropsychologiques (MeSH)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiologie" xml:lang="fr">
<term>Mémoire à court terme</term>
<term>Perception auditive</term>
<term>Perception visuelle</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Auditory Perception</term>
<term>Memory, Short-Term</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" qualifier="psychologie" xml:lang="fr">
<term>Musique</term>
</keywords>
<keywords scheme="MESH" qualifier="psychology" xml:lang="en">
<term>Music</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Female</term>
<term>Humans</term>
<term>Male</term>
<term>Neuropsychological Tests</term>
</keywords>
<keywords scheme="MESH" xml:lang="fr">
<term>Femelle</term>
<term>Humains</term>
<term>Mâle</term>
<term>Tests neuropsychologiques</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Information is often paired with music to facilitate memory and learning. However, there is a lack of basic research investigating how visual and auditory presentation styles and musical elements might facilitate recall. The purpose of this study is to isolate and determine the effects of visual and auditory presentation styles and musical elements on working memory as measured by sequential monosyllabic digit recall performance. Recall was tested on 60 undergraduate university students during six different conditions: (a) Visual + Auditory Chant, (b) Visual + Auditory Melody, (c) Visual + Auditory Speech, (d) Auditory Chant, (e) Auditory Melody, and (f) Auditory Speech. There was a significant interaction between presentation style and musical element conditions. There were significant differences between auditory and visual + auditory conditions in the melody and speech conditions but not in the chant condition. In all cases, the auditory condition had more accurate recall than the visual + auditory condition, with recall differences largest during the speech condition. There was no significant difference between chant and melody but significant differences between chant and speech and melody and speech in the visual + auditory condition. In the auditory condition, recall accuracy was lower for speech than for melody or chant. There was no significant difference between chant and melody, chant and speech, or melody and speech in the visual + auditory condition. Congruent with existing research, the addition of visual input likely overloaded working memory resulting in worse recall. Implications for clinical practice, limitations, and suggestions for future research are provided.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">29890898</PMID>
<DateCompleted>
<Year>2020</Year>
<Month>01</Month>
<Day>03</Day>
</DateCompleted>
<DateRevised>
<Year>2020</Year>
<Month>01</Month>
<Day>03</Day>
</DateRevised>
<Article PubModel="Print-Electronic">
<Journal>
<ISSN IssnType="Electronic">1558-691X</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>122</Volume>
<Issue>4</Issue>
<PubDate>
<Year>2019</Year>
<Month>Aug</Month>
</PubDate>
</JournalIssue>
<Title>Psychological reports</Title>
<ISOAbbreviation>Psychol Rep</ISOAbbreviation>
</Journal>
<ArticleTitle>Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.</ArticleTitle>
<Pagination>
<MedlinePgn>1297-1312</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1177/0033294118781937</ELocationID>
<Abstract>
<AbstractText>Information is often paired with music to facilitate memory and learning. However, there is a lack of basic research investigating how visual and auditory presentation styles and musical elements might facilitate recall. The purpose of this study is to isolate and determine the effects of visual and auditory presentation styles and musical elements on working memory as measured by sequential monosyllabic digit recall performance. Recall was tested on 60 undergraduate university students during six different conditions: (a) Visual + Auditory Chant, (b) Visual + Auditory Melody, (c) Visual + Auditory Speech, (d) Auditory Chant, (e) Auditory Melody, and (f) Auditory Speech. There was a significant interaction between presentation style and musical element conditions. There were significant differences between auditory and visual + auditory conditions in the melody and speech conditions but not in the chant condition. In all cases, the auditory condition had more accurate recall than the visual + auditory condition, with recall differences largest during the speech condition. There was no significant difference between chant and melody but significant differences between chant and speech and melody and speech in the visual + auditory condition. In the auditory condition, recall accuracy was lower for speech than for melody or chant. There was no significant difference between chant and melody, chant and speech, or melody and speech in the visual + auditory condition. Congruent with existing research, the addition of visual input likely overloaded working memory resulting in worse recall. Implications for clinical practice, limitations, and suggestions for future research are provided.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Silverman</LastName>
<ForeName>Michael J</ForeName>
<Initials>MJ</Initials>
</Author>
<Author ValidYN="Y">
<LastName>Schwartzberg</LastName>
<ForeName>Edward T</ForeName>
<Initials>ET</Initials>
<AffiliationInfo>
<Affiliation>School of Music, University of Minnesota, Minneapolis, MN, USA.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2018</Year>
<Month>06</Month>
<Day>11</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Psychol Rep</MedlineTA>
<NlmUniqueID>0376475</NlmUniqueID>
<ISSNLinking>0033-2941</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D001307" MajorTopicYN="N">Auditory Perception</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D005260" MajorTopicYN="N">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008297" MajorTopicYN="N">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008570" MajorTopicYN="N">Memory, Short-Term</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009146" MajorTopicYN="N">Music</DescriptorName>
<QualifierName UI="Q000523" MajorTopicYN="Y">psychology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009483" MajorTopicYN="N">Neuropsychological Tests</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D014796" MajorTopicYN="N">Visual Perception</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="N">Presentation style</Keyword>
<Keyword MajorTopicYN="N">auditory and visual recall</Keyword>
<Keyword MajorTopicYN="N">music</Keyword>
<Keyword MajorTopicYN="N">working memory</Keyword>
</KeywordList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2018</Year>
<Month>6</Month>
<Day>13</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2020</Year>
<Month>1</Month>
<Day>4</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2018</Year>
<Month>6</Month>
<Day>13</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">29890898</ArticleId>
<ArticleId IdType="doi">10.1177/0033294118781937</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Minnesota</li>
</region>
</list>
<tree>
<noCountry>
<name sortKey="Silverman, Michael J" sort="Silverman, Michael J" uniqKey="Silverman M" first="Michael J" last="Silverman">Michael J. Silverman</name>
</noCountry>
<country name="États-Unis">
<region name="Minnesota">
<name sortKey="Schwartzberg, Edward T" sort="Schwartzberg, Edward T" uniqKey="Schwartzberg E" first="Edward T" last="Schwartzberg">Edward T. Schwartzberg</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000570 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000570 | SxmlIndent | more
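If Dilib is not installed, individual fields can still be pulled out of a locally saved copy of the record with standard Unix tools. The sketch below assumes the `<record>` block above has been saved as `record.xml` (a hypothetical filename, not part of the Wicri toolchain) and extracts the DOI with `sed`:

```shell
# Extract the DOI from a locally saved copy of the record.
# Assumes the XML record above was saved as record.xml (hypothetical name).
sed -n 's/.*<idno type="doi">\([^<]*\)<\/idno>.*/\1/p' record.xml | head -1
```

The pattern matches the TEI `<idno type="doi">` element; `head -1` keeps only the first match, since the DOI also appears in the embedded PubMed data under other element names (`ELocationID`, `ArticleId`).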

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:29890898
   |texte=   Effects of Visual and Auditory Presentation Styles and Musical Elements on Working Memory as Measured by Monosyllabic Sequential Digit Recall.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:29890898" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021.